I am trying to use VSCode to do some Houdini tool development, and I am going to need access to a bunch of hou modules, but I cannot seem to get it working right. I found this video [youtu.be] which seems to address the issue, but I have followed it to a T and I am still getting a reportMissingImports error on the import hou line. What am I doing wrong here?
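For anyone who lands here: the usual fix is pointing the Python language server at the directory inside the Houdini install that actually contains hou.py. A minimal sketch of a workspace .vscode/settings.json, assuming a Windows Houdini 19.5 build with Python 3.9 — the install path and Python version are placeholders you must adjust for your own machine:

```json
{
    // Hypothetical path -- substitute your own $HFS and python3.Xlibs folder.
    "python.analysis.extraPaths": [
        "C:/Program Files/Side Effects Software/Houdini 19.5.640/houdini/python3.9libs"
    ]
}
```

If Pyright still reports reportMissingImports after this, double-check that the folder you listed is the one that actually contains hou.py, and that VSCode's selected interpreter matches Houdini's Python major version.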
Found 51 posts.
Search results
Technical Discussion » Using hou library in VSCode?
- Adam F
- 51 posts
- Offline
PDG/TOPs » Determining where a cook command comes from?
- Adam F
- 51 posts
- Offline
I am trying to find a way to determine whether a node cook is being initiated on the node itself vs requested from a downstream node. I have a feeling this has something to do with event handlers, but I can't seem to figure out how to get them to trigger properly. I am trying to either:
- Flip a switch TOP based on the cooking request source
- Adjust how a Python Processor works based on the result request source
I feel like there should be a way to glean the information, but just can't seem to find it. I do have something in place if I cannot automate this, but I REALLY want to automate this so I can have it in my toolbox for the future.
Here is the onGenerate() that I am using in a python processor currently:
import json, hou, os

def writeWorkItemJSONs(JSONPath, context):
    try:
        os.mkdir(JSONPath)
    except Exception as e:
        pass
    for upstream_item in upstream_items:
        jsonWorkItem = context.serializeWorkItemToJSON(upstream_item)
        with open(f'{JSONPath}/WorkItem.{upstream_item.index}.json', "w") as f:
            f.write(jsonWorkItem)
        new_item = item_holder.addWorkItem(parent=upstream_item, inProcess=True)

def loadWorkItemJSONs(JSONPath):
    for file in os.listdir(JSONPath):
        new_item = item_holder.addWorkItemFromJSONFile(f'{JSONPath}/{file}')

context = self.context
hipPath = "/".join(hou.hipFile.path().split('/')[:-1])
JSONPath = hipPath + '/workItem_JSON'
try:
    if len(os.listdir(JSONPath)) > 0:
        loadWorkItemJSONs(JSONPath)
    else:
        writeWorkItemJSONs(JSONPath, context)
except Exception as e:
    writeWorkItemJSONs(JSONPath, context)
This works about halfway to what I need. When I cook without the outputs present or without the JSON files for the work items, it processes writeWorkItemJSONs and has the work item dependencies going all the way up the chain. If the JSON files are present and the outputs are there, the work items are created from the JSONs and lack the dependencies, but the input node is still cooked. I need a way to prevent the nodes above this processor from cooking in the loadWorkItemJSONs() function. I saw that the processor is supposed to have an onPreCook() method I can invoke (here [www.sidefx.com]), but the documentation is a little light on details.
I also found that there is a filter on the pdg.graphContext that is supposed to affect the graph cooking, but I'm trying to figure out how to use it and where it would be best to do so. I have some ideas, and will report back if anything comes of them.
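A standalone sketch of the cache decision in the onGenerate above — load the serialized work items if the JSON directory is already populated, otherwise write them — with plain dicts standing in for work items; the helper name cache_or_write is my own, not part of the pdg API:

```python
import json
import os
import tempfile

def cache_or_write(json_dir, items):
    # Mirrors the try/if branch above: reuse cached JSONs when present,
    # otherwise serialize each item to its own WorkItem.N.json file.
    os.makedirs(json_dir, exist_ok=True)
    existing = sorted(os.listdir(json_dir))
    if existing:
        return ("load", existing)
    for i, item in enumerate(items):
        with open(os.path.join(json_dir, f"WorkItem.{i}.json"), "w") as f:
            json.dump(item, f)
    return ("write", sorted(os.listdir(json_dir)))

d = tempfile.mkdtemp()
print(cache_or_write(d, [{"a": 1}, {"a": 2}]))  # first run writes the files
print(cache_or_write(d, [{"a": 1}, {"a": 2}]))  # second run loads the cache
```

The part this sketch cannot capture is exactly the open question in the post: stopping the upstream nodes from cooking when the "load" branch is taken.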
Edited by Adam F - May 19, 2023 16:45:38
Technical Discussion » Is there a way to stop VDB Advect pruning activated regions
- Adam F
- 51 posts
- Offline
Soothsayer
You can change the vdb vector type with a primitive properties node.
THANK YOU! That did take care of being able to use the VDB Combine node to force the maximum size of the VDB grid. I think I can work around it, but I would like to be able to eliminate that maximum. Any ideas there?
Technical Discussion » Is there a way to stop VDB Advect pruning activated regions
- Adam F
- 51 posts
- Offline
I am attempting to build a physically accurate simulation for an ionic flow engine inside of Houdini and I am running into a rather infuriating issue. I am mostly just playing right now trying to get something developed that I can then adapt into utilizing the real world physics, so nobody needs to worry about all of that yet (and hopefully shouldn't as that is just calculating forces).
The methodology that I am attempting to implement is to utilize various VDB volumes to advect the simulation based on a velocity volume. I'm currently calculating the velocity VDB using a gradient of a clipped section of a larger whole. When the VDB Advect node takes over, it immediately prunes all padding space I try to give it, and additionally clamps all volumes to a maximum of the bounding box of the original input geometry. If I give the input geometry a bigger original bound, I get a bigger maximum. But I have not been able to find a way to either remove that maximum, or to resize the bounds at the beginning of a step and keep them.
Ideally, I want to use the bounds of the density volume at the start of each step and expand on them, so the volumes never run into invisible walls as they advect, but also keep it tight enough that the VDBs aren't unnecessarily bloated. I cannot even start working on the actual simulation solver until I get this working, so it is quite frustrating.
Also, I'm not using DOPs because I am anticipating that I will need the finite control over the calculations ahead of the advection steps. I may not even be able to use project non-divergent and have to rely on building the math such that it can't diverge.
I have attached my file from today's attempts. I commented most of the nodes with what I am using them for as well as singling out where there are bad behaviors that I can't seem to solve. There is one with VDB Combine not liking the types of vectors I have in the VDBs... so today I learned that a Vector3 on a VDB is not always the same... frustrating.
PDG/TOPs » Is there a way to reference work items in remote TOP Network
- Adam F
- 51 posts
- Offline
Ok, another update. The above code works flawlessly, I had an error in the tool which was causing the blocking issue. Dumb mistake. Fixed it and it was perfect.
PDG/TOPs » Is there a way to reference work items in remote TOP Network
- Adam F
- 51 posts
- Offline
Ok, so I have gotten something that is moderately useful (for anyone who finds this in the future). I have written what amounts to my own partition script and execute in the Python Module of my HDA, completely abandoning using PDG to manage its own work items.
I am still working on a couple of bugs regarding blocking, sometimes it works, sometimes it doesn't. This is triggered from a Pre-frame script, so it should be good. Mantra renders are perfect, geometry caches aren't blocking for the work items to finish.
import math
import hou

def process(node):
    print("Starting PDG Process")
    TOPNode = node.parm("toppath").evalAsNode()
    TOPNode.generateStaticWorkItems(True)
    PDGNode = TOPNode.getPDGNode()
    PDGContext = PDGNode.context
    partCount = int(node.parm("dataHold").eval()['machineCount'])
    partLength = math.ceil(len(PDGNode.workItems) / partCount)
    print("\t".join([f'{n}: {v}' for n, v in zip(
        ["TOPNode", "PDGNode", "PDGContext", "partCount", "partLength"],
        [TOPNode, PDGNode, PDGContext, partCount, partLength])]))
    partList = [PDGNode.workItems[i * partLength:(i + 1) * partLength]
                for i in range((len(PDGNode.workItems) + partLength - 1) // partLength)]
    print(partList[hou.intFrame() - 1])
    PDGContext.cookItems(True, [a.id for a in partList[hou.intFrame() - 1]], PDGNode.name)
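The bucketing arithmetic in that script can be sketched standalone, with plain lists standing in for pdg work items: split N items into partitions of ceil(N / machine_count) each, so every machine cooks one contiguous slice. The helper name partition is my own:

```python
import math

def partition(items, machine_count):
    # Same scheme as the partList comprehension above: fixed-size slices
    # of length ceil(N / machine_count); the last slice may be shorter.
    part_length = math.ceil(len(items) / machine_count)
    return [items[i * part_length:(i + 1) * part_length]
            for i in range((len(items) + part_length - 1) // part_length)]

print(partition(list(range(10)), 3))  # 10 items, 3 machines -> slices of 4, 4, 2
```

Note that this can produce fewer slices than machines (e.g. 10 items across 3 machines yields 3 slices, but 9 items across 4 machines yields 3 of length 3), so an indexing scheme like partList[hou.intFrame() - 1] needs the frame range to match the actual slice count.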
PDG/TOPs » Is there a way to reference work items in remote TOP Network
- Adam F
- 51 posts
- Offline
Ok, so here is some of the code I am trying to mess with in a Python Processor:
import hou

controlNode = hou.node("../../")
target = controlNode.parm("toppath").eval()
targetTOP = hou.node(target)
pdgNode = targetTOP.getPDGNode()

for upstream_item in pdgNode.workItems:
    new_item = item_holder.addWorkItem(parent=upstream_item, inProcess=True)
    new_item.setStringAttrib("source_node", target)
    new_item.setIntAttrib("source_id", upstream_item.id)
This is the onGenerate code, which works beautifully until I try to cook the work items. At that point, they all go to a pdg.workItemState.Waiting and sit there. Even if I add code to the onCook function it does not actually seem to call it when I try to cook, it just waits. What am I missing here?
PDG/TOPs » Is there a way to reference work items in remote TOP Network
- Adam F
- 51 posts
- Offline
I am building an HDA for managing TOP networks in a very specific way, but I cannot seem to figure out how to reference the target TOP network. I have tried the two nodes which look like they should work: TOP Fetch and Work Item Import. When I target my test TOP network with TOP Fetch and try to generate the work items, it just sits there processing. With Work Item Import, it will cook the target network before importing unless I have the Generate Only box ticked, but then I cannot get the work items to cook the target network's work items after the fact.
The HDA I am trying to build aims to let a user target a TOP network/node and bucket the work items for that node into partitions, then process a single partition on a remote machine, which loads the file with hqueue and executes a command. I have the partitioning working how I want, but cannot make the partitions within my HDA cook the TOP work items in the external TOP network.
In this use case, the hqueue scheduler does not seem to be an option as the machine that is running the hqueue commands is inaccessible and not setup for this.
Solaris and Karma » Getting some really weird errors and node breaks in LOPs
- Adam F
- 51 posts
- Offline
mtucker
The python API for USD that ships with Houdini _is_ the Pixar python API for USD. SideFX did not implement any of that API, or change it in any way from the Pixar implementation. If you have specific performance concerns, and especially if you see differences in behavior/performance between using the USD API that ships with Houdini compared to a built-from-scratch USD library Python API, please let us know (with specific steps to reproduce).
OK, so I have somehow completely broken my Houdini 19.0 install. I have completely uninstalled all versions of Houdini, Launcher, Engine, etc., deleted my Houdini19.0 folder in my Documents folder, then reinstalled 19.0.657 (previously I had 19.0.531, which is the one I broke), and I am still getting the below errors when I open the stage context. I have no idea what is going on or where it is pulling from to try to load the Python libraries. There are zero 19.0 directories in my Program Files/SideFX/ directory before I install, and the installed ones still end up corrupted for some reason.
Traceback (most recent call last):
File "C:\PROGRA~1\SIDEEF~1\HOUDIN~1.657\python37\lib\site-packages-forced\shiboken2\files.dir\shibokensupport\__feature__.py", line 142, in _import
return original_import(name, *args, **kwargs)
ModuleNotFoundError: No module named 'pxr.CameraUtil'
Traceback (most recent call last):
File "C:\PROGRA~1\SIDEEF~1\HOUDIN~1.657\python37\lib\site-packages-forced\shiboken2\files.dir\shibokensupport\__feature__.py", line 142, in _import
return original_import(name, *args, **kwargs)
ModuleNotFoundError: No module named 'pxr.PxOsd'
Traceback (most recent call last):
File "C:\PROGRA~1\SIDEEF~1\HOUDIN~1.657\python37\lib\site-packages-forced\shiboken2\files.dir\shibokensupport\__feature__.py", line 142, in _import
return original_import(name, *args, **kwargs)
File "C:/PROGRA~1/SIDEEF~1/HOUDIN~1.657/houdini/python3.7libs\husd\UsdHoudini\__init__.py", line 24, in <module>
import _usdHoudini
File "C:\PROGRA~1\SIDEEF~1\HOUDIN~1.657\python37\lib\site-packages-forced\shiboken2\files.dir\shibokensupport\__feature__.py", line 142, in _import
return original_import(name, *args, **kwargs)
SystemError: initialization of _usdHoudini raised unreported exception
Problem loading FreeCamera.usda files:
"Warning: Import failed for module 'pxr.CameraUtil'!
Import failed for module 'pxr.PxOsd'!
Import failed for module 'husd.UsdHoudini'!"
Traceback (most recent call last):
File "C:\PROGRA~1\SIDEEF~1\HOUDIN~1.657\python37\lib\site-packages-forced\shiboken2\files.dir\shibokensupport\__feature__.py", line 142, in _import
return original_import(name, *args, **kwargs)
ModuleNotFoundError: No module named 'pxr.Garch'
Traceback (most recent call last):
File "C:\PROGRA~1\SIDEEF~1\HOUDIN~1.657\python37\lib\site-packages-forced\shiboken2\files.dir\shibokensupport\__feature__.py", line 142, in _import
return original_import(name, *args, **kwargs)
ModuleNotFoundError: No module named 'pxr.Glf'
Traceback (most recent call last):
File "C:\PROGRA~1\SIDEEF~1\HOUDIN~1.657\python37\lib\site-packages-forced\shiboken2\files.dir\shibokensupport\__feature__.py", line 142, in _import
return original_import(name, *args, **kwargs)
ModuleNotFoundError: No module named 'pxr.UsdImagingGL'
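Errors like these often mean a second pxr/USD install on PYTHONPATH (for example a pip-installed one) is shadowing the copy that ships with Houdini. A hedged diagnostic sketch, not from the thread: print where Python would import each module from, run inside hython or whichever interpreter produces the errors above. The helper name module_origin is my own:

```python
import importlib.util

def module_origin(name):
    # File the module would be loaded from, or None if it cannot be found.
    try:
        spec = importlib.util.find_spec(name)
    except ModuleNotFoundError:
        # Raised when a parent package (e.g. pxr) is itself missing.
        return None
    return spec.origin if spec else None

# Check which pxr the interpreter resolves; an origin outside $HFS would
# point at a conflicting install.
for mod in ("pxr", "pxr.CameraUtil", "husd"):
    print(mod, "->", module_origin(mod))
```

Comparing the printed origins against the Houdini install directory should show quickly whether a leftover site-packages copy is being picked up first.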
Solaris and Karma » Getting some really weird errors and node breaks in LOPs
- Adam F
- 51 posts
- Offline
mtucker
The python API for USD that ships with Houdini _is_ the Pixar python API for USD. SideFX did not implement any of that API, or change it in any way from the Pixar implementation. If you have specific performance concerns, and especially if you see differences in behavior/performance between using the USD API that ships with Houdini compared to a built-from-scratch USD library Python API, please let us know (with specific steps to reproduce).
Will do. I will wipe my install and reload to be sure it is cleared, then run my test code again.
Solaris and Karma » Getting some really weird errors and node breaks in LOPs
- Adam F
- 51 posts
- Offline
mtucker
What Pixar library? Houdini ships with the Pixar USD python bindings already...
Yeah, I was finding better documentation on the Pixar site than on SideFX's, and was struggling to get the information I needed out of it without using Pixar's implementation. I would love to just reinstall and go back to using pure Hython for it, but I worry about SideFX's optimization. The use case I have NEEDS to be optimized to be effective, and I have run into performance issues using other methods in Hython's (non-USD) object interaction API, which the Pixar library has so far not run into when operating on USDs.
Solaris and Karma » Getting some really weird errors and node breaks in LOPs
- Adam F
- 51 posts
- Offline
I added the Pixar library to Hython. I needed it for a project that requires Hython to be run.
Solaris and Karma » Getting some really weird errors and node breaks in LOPs
- Adam F
- 51 posts
- Offline
Ok, so I'm going to do the evil thing and just post the errors, as I am unable to share the scene itself; it is something I am debugging for another artist and I do not have permission to share it.
Whenever I activate a SOP Import node in this scene I get this error:
Error while authoring a USD Preview Surface shader
Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
  File "C:/PROGRA~1/SIDEEF~1/HOUDIN~1.561/houdini/python3.7libs\hou.py", line 56236, in editableStage
    return _hou.LopNode_editableStage(self)
RuntimeError
On the light node I am getting this failure:
Unable to evaluate expression (
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:/PROGRA~1/SIDEEF~1/HOUDIN~1.561/houdini/python3.7libs\loputils.py", line 576, in getMetersPerUnit
    stage = inputnode.stage()
  File "C:/PROGRA~1/SIDEEF~1/HOUDIN~1.561/houdini/python3.7libs\hou.py", line 56268, in stage
    return _hou.LopNode_stage(self, output_index, apply_viewport_overrides, ignore_errors, use_last_cook_context_options, apply_post_layers)
RuntimeError
(/stage/domelight1/xn__houdiniguidescale_s3a)).
I am able to fix that one by deleting the function out of the parameter it is on, but that feels really weird, as that should be a piece of SideFX code. Also, the camera node gets the same error on the same function, but if I fix that one I get the following:
Unable to evaluate expression (
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:/PROGRA~1/SIDEEF~1/HOUDIN~1.561/houdini/python3.7libs\loputils.py", line 567, in getConvertedCameraParmValue
    parmvalue = convertFromMillimetersToCameraUnits(lop, parmvalue)
  File "C:/PROGRA~1/SIDEEF~1/HOUDIN~1.561/houdini/python3.7libs\loputils.py", line 545, in convertFromMillimetersToCameraUnits
    stage = inputnode.stage()
  File "C:/PROGRA~1/SIDEEF~1/HOUDIN~1.561/houdini/python3.7libs\hou.py", line 56268, in stage
    return _hou.LopNode_stage(self, output_index, apply_viewport_overrides, ignore_errors, use_last_cook_context_options, apply_post_layers)
RuntimeError
(/stage/camera1/verticalApertureOffsetConverted)).
The funny thing is, I cannot find the attribute verticalApertureOffsetConverted on the camera node at all to fix that one. Then finally, on the Karma node I am getting this error, and I cannot figure out why the node is throwing it:
Invalid source /stage/karma1/karmarenderproperties/renderproduct
Error: Unable to evaluate expression (
Traceback (most recent call last):
File "<stdin>", line 4, in expression
File "C:/PROGRA~1/SIDEEF~1/HOUDIN~1.561/houdini/python3.7libs\loputils.py", line 287, in globPrimPaths
return rule.expandedPaths(lop)
File "C:/PROGRA~1/SIDEEF~1/HOUDIN~1.561/houdini/python3.7libs\hou.py", line 57600, in expandedPaths
return _hou.LopSelectionRule_expandedPaths(self, lopnode, return_ancestors, fallback_to_new_paths)
RuntimeError
(/stage/karma1/karmarenderproperties/renderproduct/rendervars))..
The artist has shown that the shot renders on his box, but I am getting these massive, low-level errors which are preventing me from fully debugging the shot properly. Any ideas? I have tried rebuilding the nodes in the scene and in a clean scene. The camera still throws the Python error, and the Karma issue still happens in a clean scene. I did install a copy of the Pixar Python library for some other testing on a different project; could that have broken things?
PDG/TOPs » PDG Work Items generating locally, not remotely
- Adam F
- 51 posts
- Offline
tpetrick
Those logs are printed to the standard output of the shell that launched the Houdini process, not to a file on disk. They also need to be set in the environment when the Houdini process starts up.
Thanks. Those logs showed up locally, but did not show up on the farm. I know I had found a way to dump logs to a file in the past and had it work. Definitely one of those settings that I found once and haven't been able to find since.
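Since the debug output goes to the stdout of the shell that launched Houdini, one way to keep a persistent log on the farm is to redirect it yourself when starting the headless cook. A minimal sketch, assuming a bash-style shell; the script path is a placeholder, and the redirection (not any Houdini setting) is what produces the file:

```shell
# Set the PDG debug switch before the Houdini process starts, then capture
# the launching shell's stdout/stderr to a log file.
export HOUDINI_PDG_NODE_DEBUG=2
if command -v hython >/dev/null 2>&1; then
    hython /path/to/cook_script.py > pdg_cook.log 2>&1
else
    echo "hython not on PATH; source houdini_setup from \$HFS first"
fi
```

On a farm, the equivalent is making sure the wrapper that launches the Houdini process exports the variable and redirects its output, since child processes do not inherit variables set only in an interactive session.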
PDG/TOPs » PDG Work Items generating locally, not remotely
- Adam F
- 51 posts
- Offline
tpetrick
Yep, it's documented in the list of env vars: https://www.sidefx.com/docs/houdini/ref/env#houdini_pdg_node_debug [www.sidefx.com]
There are a number of PDG-specific debug switches that can be enabled -- they should all be described on that page as well.
Ok, so I have turned on several of the debug flags in my environment locally and cannot figure out where the logs are supposed to be going. I should be getting inundated with values somewhere, but I am not getting any log files and my python console is not getting anything else. I have Verbose Logging checked on the scheduler, but nothing seems to be getting output.
PDG/TOPs » PDG Work Items generating locally, not remotely
- Adam F
- 51 posts
- Offline
tpetrick
If you're using the same graph and work items aren't being generated, it sounds like one of the nodes in your graph has errors. If you're cooking via a script in a headless session, the easiest way to check for that is to set HOUDINI_PDG_NODE_DEBUG=2 in your process environment, which will enable print outs for any node errors/warnings, as well as node cook status messages. For example, in a simple graph I created with an expression error in the first node:
[13:07:12] PDG: STATUS NODE ERROR (genericgenerator1)
Unable to evaluate expression (Expression stack error (/obj/topnet1/genericgenerator1/itemcount)).
[13:07:12] PDG: STATUS NODE GENERATED (genericgenerator1)
[13:07:12] PDG: STATUS NODE COOKED (genericgenerator1)
Without seeing your .hip file, it's hard to provide any additional suggestions.
Honestly, this is exactly what I have been trying to figure out how to do. Is this documented anywhere? I had figured out months ago how to get an actual text file of all of the processes to dump out, but haven't been able to find the docs since.
As for the file, unfortunately it is under NDA, so I can only share sanitized messages and code.
PDG/TOPs » PDG Work Items generating locally, not remotely
- Adam F
- 51 posts
- Offline
I am trying to build a pipeline tool for use on a remote farm, but am having some issues. I know the code I am trying to use generally works perfectly well both locally and on the farm, but the work items aren't generated remotely.
Here is the code which is used to process the remote scene. It is in the preframe script of a ROP. It has worked flawlessly in the past.
import hou
import pdg

node = hou.node('/obj/ropnet1/topnet1/Partition')
print(node)
list(map(lambda x: x.dirty(True), node.getPDGGraphContext().graph.nodes()))
node.cookWorkItems(block=True, generate_only=True)
print(list(map(lambda x: x.workItems, node.getPDGGraphContext().graph.nodes())))
pdgNode = node.getPDGNode()
print(pdgNode.workItems)
print('Starting Renders')
pdgNode.context.cookItems(True, [wi.id for wi in pdgNode.workItems if wi.index == hou.frame()], pdgNode.name)
print('PDG Network Cooked')
and here is the expected output. The last few lines are from a python node inside of an HDA built for TOPs.
12:20:48: localscheduler: Local Scheduler: Max Slots=7, Working Dir=D:/workingDir
[[<GenericData name='Convert_genericgenerator1_850', id=850 at 0x0000000007b86b00>], [<GenericData name='Convert_switch1_852', id=852 at 0x0000000007b86080>], [<GenericData name='Partition_854', id=854 at 0x0000000007b87580>], [<GenericData name='Convert_attributecreate1_851', id=851 at 0x0000000007b84b80>], [<GenericData name='Convert_pythonprocessor1_853', id=853 at 0x0000000007b84100>], []]
[<GenericData name='Partition_854', id=854 at 0x0000000007b87580>]
Starting Renders
12:20:58: localscheduler: Local Scheduler: Max Slots=7, Working Dir=D:/workingDir
C:/PROGRA~1/SIDEEF~1/HOUDIN~1.561/bin/iconvert.exe D:/workingDir/test00001.rat
Converting D:/workingDir/test00001.png to .rat format
PDG Network Cooked
And here is what I get on the remote server:
16:33:48: localscheduler: Local Scheduler: Max Slots=15, Working Dir=/data/input/workingDir/
[[], [], [], [], []]
[]
Starting Renders
16:33:58: localscheduler: Local Scheduler: Max Slots=15, Working Dir=/data/input/workingDir
PDG Network Cooked
Edited by Adam F - Aug 9, 2022 12:49:36
Technical Discussion » Building a custom TOPs Python Processor in a .py file
- Adam F
- 51 posts
- Offline
Ok, solved several other issues. Looks like most of them came down to finding the right way to instantiate the objects. Which brings me to pdg.WorkItemHolder. It has no constructor, so when I use
item_holder = pdg.workItemHolder
it builds some sort of object whose type ends up not being the right one for when I go to call the member function pdg.WorkItemHolder.addWorkItem() from the created object; the type is wrong for self, and it throws an error saying that it does not have the self argument. When I try to use pdg.workItemHolder() it throws an error saying there isn't a constructor. Through Python it should still work, but Hython is throwing a fit. I have used the inspect library to dig into the code running in the frame and stack, and have narrowed the information for pdg.WorkItemHolder.__init__() down to being held in C:/Program Files/Side Effects Software/Houdini 19.0.561/houdini/python3.7libs/_pdg.pyd, which I am obviously not getting into without violating the EULA, so that is my end point for investigation. I really need to get this working, so if there isn't any help here I will have to make a ticket shortly. I'm sure it is possible; I just need to figure out how to get all the right objects for the arguments in the onGenerate() code.
Edited by Adam F - May 10, 2022 21:14:54
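For anyone hitting the same wall: the "missing self" error described in this thread is reproducible in plain Python whenever a name is bound to a class object rather than an instance, which suggests pdg.WorkItemHolder was being used as the class itself. A minimal repro sketch — Holder is a stand-in, not the pdg API:

```python
class Holder:
    # Stand-in for pdg.WorkItemHolder; not the real API.
    def addWorkItem(self, parent=None):
        return parent

item_holder = Holder        # the class object itself, not an instance
try:
    item_holder.addWorkItem(parent=None)
except TypeError as e:
    print(e)                # complains that 'self' was never supplied

item_holder = Holder()      # an instance: the same call now succeeds
print(item_holder.addWorkItem(parent=None))
```

Calling a method through the bare class leaves self unbound, producing exactly the "missing 1 required positional argument: 'self'" shape seen in the thread; for extension types like the pdg classes, the instance typically has to come from the framework (e.g. passed into onGenerate) rather than from a direct constructor call.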
Technical Discussion » Building a custom TOPs Python Processor in a .py file
- Adam F
- 51 posts
- Offline
So I realized after submitting that I might need to register the custom processor as a node and then add it to the graph using .addNode(), as I did previously, to get it to work. Unfortunately this produced an eerily similar error:
def registerTypes(type_registry):
    type_registry.registerNode(CustomProcessor,
                               pdg.nodeType.Processor,
                               name="customprocessor",
                               label="Custom Processor",
                               category="Custom")

registerTypes(pdg.TypeRegistry)
Python error: Traceback (most recent call last):
File "", line 24, in
File "", line 22, in registerTypes
TypeError: _registerNode() missing 1 required positional argument: 'node_type'
It definitely has the positional argument, and it is correct. pdg.nodeType.Processor is definitely the right one and definitely is in the right place.
Technical Discussion » Building a custom TOPs Python Processor in a .py file
- Adam F
- 51 posts
- Offline
Ok, so I am trying to build a custom processor for running a PDG process in Hython, and I am having some issues. I have tested the workflow using TOPs, and the methodology works perfectly in a Python Processor TOP node. I was trying to figure out how to get it to work in a standalone .py file and came across this page [www.sidefx.com], which details creating a custom processor, complete with the onGenerate and onCookItem callback info. So I set to coding. Everything seems to be right: all of the code 'runs', all of the object types pass through my function calls, and the data at the error is all correct. The work items being run through are the correct ones from the File Pattern node, the code for the loop is identical to what is in the onGenerate() function on the Python Processor node in my TOPs network, and the item_holder is the right object and has the functions. Ideas?
import pdg
from pdg.processor import PyProcessor

class CustomProcessor(PyProcessor):
    def __init__(self, node):
        PyProcessor.__init__(self, node)

    def onGenerate(self, item_holder, upstream_items, generation_type):
        for upstream_item in upstream_items:
            new_item = item_holder.addWorkItem(parent=upstream_item, inProcess=True)
            # ^^^ Error occurs here ^^^
        return pdg.result

    def onCookTask(self, work_item):
        # proprietary code goes here
        pass

pathMed = '$HIP/geo/hythonTest.filecache1.*.bgeo.sc'
whereItWorks = pdg.GraphContext("testBed")
whatWorks = whereItWorks.addScheduler("localscheduler")
findem = whereItWorks.addNode("filepattern")
whereItWorks.setValue(findem.name, 'pattern', pathMed, 0)
findem.cook(True)
item_holder = pdg.WorkItemHolder
test = CustomProcessor(findem).onGenerate(item_holder, findem.workItems, pdg.generationType.Static)
Error: Python error: Traceback (most recent call last):
File "", line 66, in
File "", line 39, in onGenerate
TypeError: _addWorkItem() missing 1 required positional argument: 'self'
Edited by Adam F - May 5, 2022 18:17:02